The Transformer architecture has become a fundamental element of the prevalent natural language processing (NLP) models. With the trend toward large NLP models, the increasing memory and computation costs hinder their efficient deployment on resource-limited devices. Therefore, transformer quantization has attracted wide research interest. Recent work recognizes that structured outliers are the critical bottleneck for quantization performance. However, their proposed methods increase the computation overhead and still leave the outliers there. To fundamentally address this problem, this paper delves into the inherent inducement and importance of the outliers. We discover that $\boldsymbol{\gamma}$ in LayerNorm (LN) acts as a sinful amplifier for the outliers, and the importance of outliers varies greatly, where some outliers provided by a few tokens cover a large area but can be clipped sharply without negative impact. Motivated by these findings, we propose an outlier suppression framework with two components: Gamma Migration and Token-Wise Clipping. Gamma Migration migrates the outlier amplifier to subsequent modules in an equivalent transformation, contributing to a more quantization-friendly model without any extra burden. Token-Wise Clipping takes advantage of the large variance of token ranges and designs a token-wise coarse-to-fine pipeline, obtaining a clipping range with minimal final quantization loss in an efficient way. This framework effectively suppresses the outliers and can be used in a plug-and-play mode. Extensive experiments prove that our framework surpasses existing works and, for the first time, pushes 6-bit post-training BERT quantization to the full-precision (FP) level. Our code is available at https://github.com/wimh966/outlier_suppression.
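The equivalent transformation behind Gamma Migration can be illustrated directly. Below is a minimal PyTorch sketch for the simple case where a LayerNorm feeds a single Linear layer: since LN(x) = γ ⊙ x̂ + β = γ ⊙ (x̂ + β/γ), the element-wise γ can be absorbed into the input columns of the next layer's weight, leaving a non-scaling LayerNorm. This is only an illustration of the idea, not the paper's full procedure (in BERT the LN output also feeds the residual branch, which needs the same treatment, and γ entries near zero need care).

```python
import torch
import torch.nn as nn

@torch.no_grad()
def migrate_gamma(ln: nn.LayerNorm, linear: nn.Linear) -> None:
    """Equivalently move LayerNorm's gamma into the following Linear layer."""
    gamma = ln.weight.clone()
    ln.bias.div_(gamma)                       # new beta' = beta / gamma
    ln.weight.fill_(1.0)                      # non-scaling LayerNorm
    linear.weight.mul_(gamma.unsqueeze(0))    # scale each input column by gamma
    # linear.bias is unchanged.

# quick equivalence check (gamma/beta randomized so the test is non-trivial)
x = torch.randn(4, 16)
ln, fc = nn.LayerNorm(16), nn.Linear(16, 8)
with torch.no_grad():
    ln.weight.copy_(torch.rand(16) + 0.5)
    ln.bias.copy_(torch.randn(16))
before = fc(ln(x))
migrate_gamma(ln, fc)
assert torch.allclose(before, fc(ln(x)), atol=1e-5)
```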
Neural-network-based driving planners have shown great promise in improving the task performance of autonomous driving. However, ensuring the safety of systems with neural-network-based components, especially in dense and highly interactive traffic environments, is critical yet challenging. In this work, we propose a safety-driven interactive planning framework for neural-network-based lane changing. To prevent over-conservative planning, we identify the driving behavior of surrounding vehicles and assess their aggressiveness, and then adapt the planned trajectory accordingly in an interactive manner. The ego vehicle can proceed with the lane change if a safe evasion trajectory exists even under the predicted worst case; otherwise, it can stay near its current lateral position or return to the original lane. We quantitatively demonstrate the effectiveness of our planner design and its advantage over baseline methods through extensive simulations with diverse and comprehensive experimental settings, as well as on real-world scenarios collected by an autonomous driving company.
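The decision rule stated above can be summarized in a small sketch. The helper names and trajectory representation below are illustrative assumptions; the point is only the safety check: commit to the lane change only when a collision-free evasion trajectory exists against the predicted worst-case behavior.

```python
from typing import Callable, List

# Illustrative representation: a trajectory as a list of (x, y, t) waypoints.
Trajectory = List[tuple]

def choose_lane_change_action(
    worst_case_prediction: Trajectory,
    evasion_candidates: List[Trajectory],
    is_collision_free: Callable[[Trajectory, Trajectory], bool],
) -> str:
    """Proceed with the lane change only if some evasion trajectory stays
    collision-free against the predicted worst-case behavior of the
    surrounding vehicle; otherwise fall back to a conservative maneuver."""
    safe_evasion_exists = any(
        is_collision_free(evasion, worst_case_prediction)
        for evasion in evasion_candidates
    )
    if safe_evasion_exists:
        return "continue_lane_change"
    return "hold_position_or_return_to_original_lane"
```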
Model binarization is an effective method of compressing neural networks and accelerating their inference. However, a significant performance gap still exists between the 1-bit model and its 32-bit counterpart. Empirical studies show that binarization causes information loss in both forward and backward propagation. We propose a novel Distribution-sensitive Information Retention Network (DIR-Net) that retains information in the forward and backward propagation by improving internal propagation and introducing external representations. DIR-Net mainly relies on three technical contributions: (1) Information Maximized Binarization (IMB): minimizing the information loss and the binarization error of weights/activations simultaneously by weight balance and standardization; (2) Distribution-sensitive Two-stage Estimator (DTE): retaining the information of gradients by distribution-sensitive soft approximation, jointly considering the updating capability and accurate gradients; (3) Representation Binarization-aware Distillation (RBD): retaining the representation information by distilling the representations between the full-precision and binarized networks. DIR-Net investigates both the forward and backward processes of BNNs from a unified information perspective, providing new insight into the mechanism of network binarization. The three techniques in our DIR-Net are versatile and effective and can be applied to various structures to improve BNNs. Comprehensive experiments on image classification and object detection tasks show that our DIR-Net consistently outperforms state-of-the-art binarization methods under mainstream and compact architectures, such as ResNet, VGG, EfficientNet, DARTS, and MobileNet. Additionally, we deploy DIR-Net on real-world resource-limited devices, achieving 11.1x storage saving and 5.4x speedup.
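As a rough illustration of the IMB idea for weights, the sketch below balances and standardizes the weights per output channel before binarizing them with an error-minimizing per-channel scale. The exact scaling variant, the straight-through estimator used during training, and the DTE/RBD components are not reproduced here; treat this as a sketch under simplifying assumptions, not DIR-Net itself.

```python
import torch

def imb_binarize_weights(w: torch.Tensor) -> torch.Tensor:
    """Balance (zero-mean) and standardize weights per output channel so the
    resulting signs carry more information, then binarize with a per-channel
    scale that reduces the quantization error."""
    w2d = w.view(w.size(0), -1)                                # (out_channels, rest)
    w_std = w2d - w2d.mean(dim=1, keepdim=True)                # balance: zero mean
    w_std = w_std / (w_std.std(dim=1, keepdim=True) + 1e-8)    # standardize
    scale = w_std.abs().mean(dim=1, keepdim=True)              # L1-based scaling factor
    w_bin = scale * torch.sign(w_std)
    return w_bin.view_as(w)
```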
Recently, generative data-free quantization has emerged as a practical approach that compresses neural networks to low bit-widths without accessing real data. It generates data for quantizing the network by utilizing the batch normalization (BN) statistics of its full-precision counterpart. However, our study shows that in practice, synthetic data constrained solely by BN statistics suffers severe homogenization at both the distribution level and the sample level, which leads to serious degradation of the quantized network. This paper presents a generic Diverse Sample Generation (DSG) scheme for generative data-free post-training quantization and quantization-aware training, to mitigate this detrimental homogenization. In our DSG, we first slack the statistics alignment for features in the BN layers to relax the distribution constraint. Then we strengthen the loss impact of specific BN layers for different samples and inhibit the correlation among samples in the generation process, diversifying the samples from the statistical and spatial perspectives, respectively. Extensive experiments show that, on large-scale image classification tasks, our DSG consistently outperforms existing data-free quantization methods across various neural architectures, especially under ultra-low bit-widths (e.g., a 22% gain under the W4A4 setting). Moreover, the data diversification brought by our DSG yields general gains for various quantization methods, demonstrating that diversity is an important property of high-quality synthetic data for data-free quantization.
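A minimal sketch of what a slacked (relaxed) BN-statistics alignment loss could look like is given below: instead of forcing the synthetic-feature statistics to match the stored BN statistics exactly, penalties only apply outside a margin. The margin `delta` and the penalty form are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def slack_bn_alignment_loss(feat_mean: torch.Tensor, feat_var: torch.Tensor,
                            bn_mean: torch.Tensor, bn_var: torch.Tensor,
                            delta: float = 0.1) -> torch.Tensor:
    """Relaxed BN-statistics alignment for the data generator: synthetic
    feature statistics are only penalized when they fall outside a slack
    region of width `delta` around the stored BN statistics, instead of
    being forced to match them exactly (which homogenizes the samples)."""
    mean_gap = (feat_mean - bn_mean).abs()
    var_gap = (feat_var - bn_var).abs()
    # Zero loss inside the slack region, quadratic penalty outside it.
    return F.relu(mean_gap - delta).pow(2).mean() + \
           F.relu(var_gap - delta).pow(2).mean()
```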
Quantization has become one of the most prevalent approaches to compress and accelerate neural networks. Recently, data-free quantization has been widely studied as a practical and promising solution. It synthesizes data for calibrating the quantized model according to the batch normalization (BN) statistics of the FP32 model and significantly relieves the heavy dependence on real training data in traditional quantization methods. Unfortunately, we find that in practice, synthetic data identically constrained by BN statistics suffers serious homogenization at both the distribution level and the sample level, which further causes a significant performance drop of the quantized model. We propose a Diverse Sample Generation (DSG) scheme to mitigate the adverse effects of this homogenization. Specifically, we slack the alignment of feature statistics in the BN layers to relax the constraint at the distribution level, and design a layerwise enhancement to reinforce specific layers for different data samples. Our DSG scheme is versatile and can even be applied to state-of-the-art post-training quantization methods such as AdaRound. We evaluate the DSG scheme on the large-scale image classification task and consistently obtain significant improvements over various network architectures and quantization methods, especially when quantizing to lower bits (e.g., up to 22% improvement on W4A4). Moreover, benefiting from the enhanced diversity, models calibrated with synthetic data perform close to those calibrated with real data and even outperform them on W4A4.
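The layerwise enhancement can similarly be sketched: each synthetic sample draws its own emphasis over the BN layers, so different samples are pushed to satisfy different layers and become more diverse. The per-sample, per-layer loss input and the one-hot emphasis below are simplifying assumptions for illustration only.

```python
import torch

def layerwise_enhanced_loss(per_layer_losses: torch.Tensor) -> torch.Tensor:
    """per_layer_losses: (batch, num_bn_layers) BN-alignment losses, one entry
    per synthetic sample and per BN layer (an assumption for this sketch)."""
    b, l = per_layer_losses.shape
    emphasis = torch.zeros(b, l)
    emphasis[torch.arange(b), torch.randint(l, (b,))] = 1.0  # one enhanced layer per sample
    weights = 1.0 + emphasis                                  # base weight plus the emphasis
    return (weights * per_layer_losses).mean()
```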
Graph contrastive learning has emerged as a powerful tool for unsupervised graph representation learning. The key to its success is acquiring high-quality positive and negative samples as contrastive pairs in order to learn the underlying structural semantics of the input graph. Recent works usually sample negative samples from the same training batch as the positive samples, or from an external irrelevant graph. However, a significant limitation of such strategies is the unavoidable problem of sampling false negative samples. In this paper, we propose a novel method that utilizes a \textbf{C}ounterfactual mechanism to generate artificial hard negative samples for \textbf{G}raph \textbf{C}ontrastive learning (CGC), which takes a different perspective from those sampling-based strategies. We utilize the counterfactual mechanism to produce hard negative samples, ensuring that the generated samples are similar to, but have labels different from, the positive samples. The proposed method achieves satisfying results on several datasets compared to some traditional unsupervised graph learning methods and some SOTA graph contrastive learning methods. We also conduct supplementary experiments to give an extensive illustration of the proposed method, including the performance of CGC with different hard negative samples and evaluations of hard negative samples generated with different similarity measurements.
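To show where such generated hard negatives would enter the training objective, here is a generic InfoNCE-style loss with extra per-anchor hard negatives. This is not the paper's exact loss; it only illustrates how counterfactually generated samples could be consumed as additional negatives.

```python
import torch
import torch.nn.functional as F

def contrastive_loss_with_hard_negatives(anchor: torch.Tensor,
                                         positive: torch.Tensor,
                                         hard_negatives: torch.Tensor,
                                         tau: float = 0.5) -> torch.Tensor:
    """anchor, positive: (B, D); hard_negatives: (B, M, D) generated samples
    that are close to the anchor but assumed to carry a different label."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(hard_negatives, dim=-1)
    pos_logit = (a * p).sum(-1, keepdim=True) / tau           # (B, 1)
    neg_logits = torch.einsum("bd,bmd->bm", a, n) / tau       # (B, M)
    logits = torch.cat([pos_logit, neg_logits], dim=1)
    labels = torch.zeros(a.size(0), dtype=torch.long)         # positive is index 0
    return F.cross_entropy(logits, labels)
```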
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT shows strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
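One hedged reading of "encoding the 3D points into multi-modal features" is a coordinate-based positional encoding shared by both modalities, sketched below; the MLP design and dimensions are assumptions for illustration, not CMT's actual modules.

```python
import torch
import torch.nn as nn

class CoordsEncodingSketch(nn.Module):
    """Illustrative implicit alignment: 3D point coordinates associated with
    each modality's tokens are mapped by a small MLP into the token dimension
    and added as positional encodings, so image and point cloud tokens share
    a common 3D-aware space without an explicit view transformation."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, tokens: torch.Tensor, coords_3d: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, dim); coords_3d: (B, N, 3) points tied to each token
        return tokens + self.mlp(coords_3d)
```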
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
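A NAIVEATTACK-style poisoning step is straightforward to sketch: stamp a small trigger patch onto a fraction of the raw images (and relabel them to the attacker's target class) before distillation starts. The patch location, poison fraction, and dirty-label choice below are illustrative assumptions; DOORPING differs in that it keeps optimizing the trigger throughout the distillation procedure.

```python
import torch

def naive_poison(images: torch.Tensor, labels: torch.Tensor,
                 trigger: torch.Tensor, target_class: int = 0,
                 poison_fraction: float = 0.1):
    """Stamp a trigger patch onto the first `poison_fraction` of the raw images
    and relabel them to `target_class` before the distillation procedure starts.
    images: (N, C, H, W); labels: (N,); trigger: (C, h, w) pasted bottom-right."""
    poisoned_x, poisoned_y = images.clone(), labels.clone()
    n_poison = int(poison_fraction * images.size(0))
    _, h, w = trigger.shape
    poisoned_x[:n_poison, :, -h:, -w:] = trigger   # paste the trigger patch
    poisoned_y[:n_poison] = target_class           # dirty-label backdoor
    return poisoned_x, poisoned_y
```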
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support/query features based on a Transformer-like framework. Our key insights are two-fold: First, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature-level and instance-level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
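The first "reference" step can be sketched as masked average pooling of support features into dynamic class centers that then re-weight the query feature map. The similarity-based re-weighting below is an illustrative assumption of how such centers could be applied, not the paper's exact module.

```python
import torch
import torch.nn.functional as F

def masked_class_centers(support_feats: torch.Tensor,
                         support_masks: torch.Tensor) -> torch.Tensor:
    # support_feats: (K, C, H, W); support_masks: (K, 1, H, W) in {0, 1}.
    masked = support_feats * support_masks
    area = support_masks.sum(dim=(2, 3)).clamp(min=1.0)
    return masked.sum(dim=(2, 3)) / area              # (K, C) dynamic class centers

def reweight_query(query_feat: torch.Tensor, centers: torch.Tensor) -> torch.Tensor:
    # query_feat: (C, H, W); centers: (K, C). Emphasize pixels similar to any center.
    c, h, w = query_feat.shape
    q = F.normalize(query_feat.view(c, -1), dim=0)    # unit-norm pixel vectors
    z = F.normalize(centers, dim=1)
    sim = (z @ q).max(dim=0).values.view(1, h, w)     # best-class cosine similarity
    return query_feat * (1.0 + sim)
```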
This paper focuses on designing efficient models with low parameters and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models while trading off model accuracy and efficiency well.
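A rough sketch of an iRMB-like block is given below: a 1x1 expansion, a depthwise convolution for short-distance dependency, lightweight self-attention for long-distance interaction inside the expanded space, and a 1x1 projection with a residual connection. The exact attention variant (EMO uses a more efficient design than the full global attention used here), normalization, and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class iRMBSketch(nn.Module):
    """Inverted-residual block mixing a depthwise conv (CNN-like, local) with
    self-attention (Transformer-like, global) in the expanded feature space."""
    def __init__(self, dim: int, expansion: int = 4, num_heads: int = 4):
        super().__init__()
        hidden = dim * expansion
        self.expand = nn.Conv2d(dim, hidden, kernel_size=1)
        self.norm = nn.BatchNorm2d(hidden)
        self.act = nn.SiLU()
        self.dwconv = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1,
                                groups=hidden)            # short-distance mixing
        self.attn = nn.MultiheadAttention(hidden, num_heads, batch_first=True)
        self.project = nn.Conv2d(hidden, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        y = self.act(self.norm(self.expand(x)))
        y = self.dwconv(y)                                # local dependency
        tokens = y.flatten(2).transpose(1, 2)             # (B, HW, hidden)
        attn_out, _ = self.attn(tokens, tokens, tokens)   # long-distance interaction
        y = y + attn_out.transpose(1, 2).reshape(b, -1, h, w)
        return x + self.project(y)                        # inverted residual
```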